10 research outputs found

    FPGA Implementation of An Event-driven Saliency-based Selective Attention Model

    Full text link
    Artificial vision systems of autonomous agents face very difficult challenges, as their vision sensors are required to transmit vast amounts of information to the processing stages and to process it in real time. A first approach to reducing data transmission is to use event-based vision sensors, whose pixels produce events only when there are changes in the input. However, even for event-based vision, the transmission and processing of visual data can be quite onerous. Currently, these challenges are addressed by using high-speed communication links and powerful machine-vision processing hardware. But if resources are limited, instead of processing all the sensory information in parallel, an effective strategy is to divide the visual field into several small sub-regions, choose the region of highest saliency, process it, and shift the focus of attention serially to regions of decreasing saliency. This strategy, also commonly used by the visual systems of many animals, is typically referred to as "selective attention". Here we present a digital architecture implementing a saliency-based selective visual attention model for processing asynchronous event-based sensory information received from a Dynamic Vision Sensor (DVS). For ease of prototyping, we use a standard digital design flow and map the architecture onto an FPGA. We describe the architecture block diagram, highlighting the efficient use of the available hardware resources, demonstrated through experimental results obtained with a hardware setup in which the FPGA is interfaced with the DVS camera.
    Comment: 5 pages, 5 figures
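    A minimal Python sketch of the serial attention strategy this abstract describes: events are accumulated into a grid of sub-regions, a winner-take-all picks the most salient region, and inhibition of return shifts the focus to regions of decreasing saliency. The grid size, decay constant, and event format are illustrative assumptions, not the parameters of the FPGA design.

        import numpy as np

        GRID, SENSOR = 8, 128            # 8x8 sub-regions over a 128x128 sensor (assumed)
        saliency = np.zeros((GRID, GRID))
        inhibited = np.zeros((GRID, GRID), dtype=bool)

        def accumulate(events, decay=0.9):
            """Leaky accumulation of event counts per sub-region."""
            global saliency
            saliency *= decay
            for x, y, t, p in events:    # event format (x, y, t, polarity) assumed
                saliency[y * GRID // SENSOR, x * GRID // SENSOR] += 1.0

        def next_focus():
            """Winner-take-all selection with inhibition of return."""
            masked = np.where(inhibited, -np.inf, saliency)
            idx = np.unravel_index(np.argmax(masked), saliency.shape)
            inhibited[idx] = True        # suppress the winner so attention moves on
            return tuple(int(i) for i in idx)

        rng = np.random.default_rng(0)
        burst = [(rng.integers(16), rng.integers(16), 0, 1) for _ in range(50)]
        accumulate(burst)
        print(next_focus())              # -> (0, 0): the most active region wins first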

    Instantaneous Stereo Depth Estimation of Real-World Stimuli with a Neuromorphic Stereo-Vision Setup

    Full text link
    The stereo-matching problem, i.e., matching corresponding features in two different views to reconstruct depth, is efficiently solved in biology. Yet it remains the computational bottleneck for classical machine vision approaches. By exploiting the properties of event cameras, recently proposed Spiking Neural Network (SNN) architectures for stereo vision have the potential to simplify the stereo-matching problem. Several solutions that combine event cameras with spike-based neuromorphic processors already exist. However, they are either simulated on digital hardware or tested on simplified stimuli. In this work, we use the Dynamic Vision Sensor 3D Human Pose Dataset (DHP19) to validate, with real-world data, a brain-inspired event-based stereo-matching architecture implemented on a mixed-signal neuromorphic processor. Our experiments show that this SNN architecture, composed of coincidence detectors and disparity-sensitive neurons, is able to provide a coarse estimate of the input disparity instantaneously, thereby detecting the presence of a stimulus moving in depth in real time.
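    A minimal sketch of the coincidence-detector idea named in this abstract: a disparity-tuned unit responds when a left and a right event on the same row arrive within a short temporal window at a fixed horizontal offset, and the disparity gathering the most coincidences is taken as the coarse estimate. The window length, disparity range, and event format are illustrative assumptions, not the parameters of the neuromorphic implementation.

        WINDOW_US = 50                   # coincidence window in microseconds (assumed)
        DISPARITIES = range(8)           # candidate disparities in pixels (assumed)

        def coincidences(left_events, right_events):
            """Count left/right event pairs supporting each disparity.
            Events are (x, y, t) tuples with timestamps in microseconds."""
            votes = {d: 0 for d in DISPARITIES}
            for xl, yl, tl in left_events:
                for xr, yr, tr in right_events:
                    if yl == yr and abs(tl - tr) <= WINDOW_US:
                        d = xl - xr      # horizontal offset between the two views
                        if d in votes:
                            votes[d] += 1
            return votes

        # A stimulus at disparity 3: right-camera events are shifted by -3 pixels.
        left = [(10, 5, 100), (11, 5, 300), (12, 5, 500)]
        right = [(7, 5, 120), (8, 5, 310), (9, 5, 480)]
        votes = coincidences(left, right)
        print(max(votes, key=votes.get)) # -> 3, a coarse instantaneous estimate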

    A spike-based neuromorphic stereo architecture for active vision

    Full text link
    The problem of finding stereo correspondences in binocular vision is solved effortlessly in nature and yet is still a critical bottleneck for artificial machine vision systems. As temporal information is a crucial feature in this process, the advent of event-based vision sensors and dedicated event-based processors promises to offer an effective approach to solving stereo matching. Indeed, event-based neuromorphic hardware provides an optimal substrate for biologically inspired, fast, asynchronous computation that can make explicit use of precise temporal coincidences. Here we present an event-based stereo-vision system that fully leverages the advantages of brain-inspired neuromorphic computing hardware by interfacing event-based vision sensors to an event-based mixed-signal analog/digital neuromorphic processor. We describe the multi-chip sensory-processing setup developed and demonstrate a proof-of-concept implementation of cooperative stereo matching that can be used to build brain-inspired active vision systems.
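    A highly simplified sketch of the cooperative stereo-matching principle named in this abstract, in the classical Marr-Poggio style: match units excite same-disparity neighbours (the continuity constraint) and inhibit competing disparities at the same position (a simplified uniqueness constraint). Network size, gains, and threshold are illustrative assumptions, not those of the neuromorphic implementation.

        import numpy as np

        X, D, STEPS = 32, 8, 10          # positions, disparities, update steps (assumed)
        rng = np.random.default_rng(1)
        C = (rng.random((X, D)) < 0.2).astype(float)   # initial candidate matches

        for _ in range(STEPS):
            # Excitation: support from same-disparity neighbours along x (continuity).
            excite = np.zeros_like(C)
            excite[1:] += C[:-1]
            excite[:-1] += C[1:]
            # Inhibition: competing disparities at the same position (uniqueness).
            inhibit = C.sum(axis=1, keepdims=True) - C
            C = ((C + 2.0 * excite - 1.0 * inhibit) > 1.5).astype(float)

        print(C.sum(axis=1))             # ideally few surviving disparities per position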

    Discrimination of EMG signals using a neuromorphic implementation of a spiking neural network

    Full text link
    An accurate description of muscular activity plays an important role in clinical diagnosis and rehabilitation research. Electromyography (EMG) is the most widely used technique for obtaining such descriptions; the EMG signal reflects the electrical changes generated by the activity of the motor neurons. Typically, to decode muscular activation during different movements, a large number of individual motor neurons are monitored simultaneously, producing large amounts of data to be transferred and processed by computing devices. In this paper, we follow an alternative approach that can be deployed locally on the sensor side. We propose a neuromorphic implementation of a spiking neural network (SNN) to extract spatio-temporal information from EMG signals locally and classify hand gestures with very low power consumption. We present experimental results on the input data stream using a mixed-signal analog/digital neuromorphic processor. We performed a thorough investigation of the performance of the SNN implemented on the chip by, first, calculating PCA on the activity of the silicon neurons at the input and hidden layers to show how the network helps separate the samples of different classes; and, second, classifying the data using state-of-the-art SVM and logistic regression methods as well as a hardware-friendly spike-based read-out. The traditional algorithms achieved classification rates of 84% and 81%, respectively, while the spiking learning method achieved 74%. The power consumption of the SNN is 0.05 mW, showing the potential of this approach for ultra-low-power processing.
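    A minimal sketch of the read-out analysis this abstract describes: spike counts of the silicon neurons serve as feature vectors, PCA is used to inspect class separation, and a standard classifier is trained on the counts. The synthetic Poisson data below stands in for recorded chip activity, and logistic regression stands in for one of the traditional read-outs; all sizes are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_trials, n_neurons = 200, 64
        labels = rng.integers(0, 2, n_trials)          # two gesture classes
        # Fake spike counts: each class drives a different subset of neurons.
        rates = np.where(labels[:, None] == 0, 5.0, 2.0) * np.ones((1, n_neurons))
        rates[:, n_neurons // 2:] = np.where(labels[:, None] == 0, 2.0, 5.0)
        counts = rng.poisson(rates)

        # PCA to visualise how well the network separates the two classes.
        pcs = PCA(n_components=2).fit_transform(counts)
        print("PC1 class means:", pcs[labels == 0, 0].mean(), pcs[labels == 1, 0].mean())

        X_tr, X_te, y_tr, y_te = train_test_split(counts, labels, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print("classification rate:", clf.score(X_te, y_te))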

    A Spike-Based Neuromorphic Architecture of Stereo Vision

    Get PDF
    The problem of finding stereo correspondences in binocular vision is solved effortlessly in nature and yet it is still a critical bottleneck for artificial machine vision systems. As temporal information is a crucial feature in this process, the advent of event-based vision sensors and dedicated event-based processors promises to offer an effective approach to solving the stereo-matching problem. Indeed, event-based neuromorphic hardware provides an optimal substrate for fast, asynchronous computation that can make explicit use of precise temporal coincidences. However, although several biologically inspired solutions have already been proposed, the performance benefits of combining event-based sensing with asynchronous and parallel computation are yet to be explored. Here we present a hardware spike-based stereo-vision system that leverages the advantages of brain-inspired neuromorphic computing by interfacing two event-based vision sensors to an event-based mixed-signal analog/digital neuromorphic processor. We describe a prototype interface designed to enable the emulation of a stereo-vision system on neuromorphic hardware and quantify the stereo-matching performance with two datasets. Our results provide a path toward the realization of low-latency, end-to-end, event-based neuromorphic architectures for stereo vision.
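    One simple way stereo-matching performance of the kind reported in this abstract can be quantified, as a minimal sketch: the fraction of estimates whose disparity falls within a fixed tolerance of the ground truth. The function name, tolerance, and example values are illustrative assumptions, not the paper's metric or results.

        def matching_accuracy(estimated, ground_truth, tol=1):
            """Share of disparity estimates within +/- tol pixels of ground truth."""
            hits = sum(abs(e - g) <= tol for e, g in zip(estimated, ground_truth))
            return hits / len(estimated)

        print(matching_accuracy([3, 4, 2, 7, 3], [3, 3, 3, 3, 3], tol=1))  # -> 0.8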

    Fall detection with event-based data: A case study

    No full text
    Fall detection systems are relevant in our aging society, supporting efforts to reduce the impact of accidental falls. However, current solutions lack the ability to combine low power consumption, privacy protection, low-latency response, and low payload. In this work, we address this gap through a comparative analysis of the trade-off between effectiveness and energy consumption, comparing a Recurrent Spiking Neural Network (RSNN) with a Long Short-Term Memory (LSTM) and a Convolutional Neural Network (CNN). By leveraging two pre-existing RGB datasets and an event-camera simulator, we generated event data by converting intensity frames into event streams. We could thus harness the salient features of event-based data and analyze their benefits when combined with RSNNs and LSTMs. The compared approaches are evaluated on two data sets collected from a single subject: one from a camera attached to the neck (N-data) and the other from a camera attached to the waist (W-data). Each data set contains 469 video samples, of which 213 are four types of fall examples and the rest are nine types of non-fall daily activities. Compared to the CNN, which operates on the high-resolution RGB frames, the RSNN requires 200× fewer trainable parameters; however, the CNN outperforms the RSNN by 23.7 and 17.1 percentage points for W- and N-data, respectively. Compared to the LSTM, which operates on event-based input, the RSNN requires 5× fewer trainable parameters and 2000× fewer MAC operations, while exhibiting accuracy decreases of 1.9 and 8.7 percentage points for W- and N-data, respectively. Overall, our results show that the event-based data preserve enough information to detect falls. Our work paves the way toward highly energy-efficient fall detection systems.
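    A minimal sketch of how an event-camera simulator converts intensity frames into an event stream, as done in this abstract for the RGB fall datasets: a pixel emits an ON/OFF event whenever its log intensity changes by more than a contrast threshold since its last event. The threshold, frame timing, and function names are illustrative assumptions, not those of the simulator actually used.

        import numpy as np

        THRESHOLD = 0.2                  # log-intensity contrast threshold (assumed)

        def frames_to_events(frames, dt=1.0):
            """Yield (t, y, x, polarity) events from a stack of grayscale frames."""
            ref = np.log(frames[0].astype(float) + 1e-3)   # per-pixel reference level
            for i, frame in enumerate(frames[1:], start=1):
                logf = np.log(frame.astype(float) + 1e-3)
                diff = logf - ref
                ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
                for y, x in zip(ys, xs):
                    yield (i * dt, y, x, 1 if diff[y, x] > 0 else -1)
                    ref[y, x] = logf[y, x]                 # reset reference after event

        # Four toy frames: brightness jumps up once and down once -> two event bursts.
        frames = np.stack([np.full((4, 4), v) for v in (0.5, 0.8, 0.8, 0.3)])
        print(len(list(frames_to_events(frames))))          # 32 events, none for frame 2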

    Closed-Loop Spiking Control on a Neuromorphic Processor Implemented on the iCub

    Full text link
    Neuromorphic engineering promises the deployment of low-latency, adaptive, and low-power systems that can lead to the design of truly autonomous artificial agents. However, many building blocks for developing a fully neuromorphic artificial agent are still missing. While neuromorphic sensing, perception, and decision-making building blocks are quite mature, those for motor control and actuation are lagging behind. In this paper we present a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, which emulates a spiking neural network that continuously calculates an error signal from the desired target and the feedback signals. The system uses population coding and recurrent Winner-Take-All networks to encode the signals robustly. Recurrent connections within each population are used to speed up convergence, decrease the effect of mismatch, and improve selectivity. The error signal computed in this way is then fed into three additional populations of spiking neurons, which produce the proportional, integral, and derivative terms of classical controllers by exploiting the temporal dynamics of the network's synapses and neurons. To validate this approach, we interfaced the neuromorphic motor controller with an iCub robot simulator and tested the spiking controller in a single-joint control task for the robot's head yaw. We demonstrate the correct performance of the spiking controller in a step-response experiment and apply it to a target pursuit task.
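    A minimal sketch of the control scheme this abstract describes, written in plain discrete form rather than with spiking populations: the error between target and feedback drives proportional, integral, and derivative terms, which the paper computes with three neural populations. The gains, time step, and toy plant standing in for the iCub head-yaw joint are all illustrative assumptions.

        def pid_controller(kp, ki, kd, dt):
            integral, prev_error = 0.0, 0.0
            def step(target, feedback):
                nonlocal integral, prev_error
                error = target - feedback          # error-computing population
                integral += error * dt             # integral-term population
                derivative = (error - prev_error) / dt
                prev_error = error                 # derivative-term population state
                return kp * error + ki * integral + kd * derivative
            return step

        # Step-response experiment on a toy first-order plant (not the iCub model).
        ctrl, angle = pid_controller(kp=1.0, ki=0.5, kd=0.05, dt=0.01), 0.0
        for _ in range(1000):
            angle += ctrl(30.0, angle) * 0.01      # drive the joint toward 30 degrees
        print(round(angle, 2))                     # settles near the target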

    Processing EMG signals using reservoir computing on an event-based neuromorphic system

    Full text link
    Electromyography (EMG) signals carry information about the movements of skeletal muscles. On-line EMG processing and analysis can be applied to different types of human-machine interfaces and can benefit patient rehabilitation strategies in cases of injury or stroke. However, continuous monitoring and data collection produce large amounts of data and introduce a bottleneck for further processing by computing devices. Neuromorphic technology offers the possibility of processing the data directly on the sensor side, in real time and with very low power consumption. In this work we present the first steps toward the design of a neuromorphic event-based neural processing system that can be directly interfaced to surface EMG (sEMG) sensors for the on-line classification of motor neuron output activities. We recorded the EMG signals related to two movements, open- and closed-hand gestures, converted them into asynchronous Address-Event Representation (AER) signals, provided them as input to a recurrent spiking neural network implemented on an ultra-low-power neuromorphic chip, and analyzed the chip's response. We configured the recurrent network as a Liquid State Machine (LSM) as a means to classify the spatio-temporal data, and evaluated the Separation Property (SP) of the liquid states for the two movements. We present experimental results showing that the activity of the silicon neurons can be encoded in state variables for which the average state distance between two different gestures is larger than that between repetitions of the same gesture across different trials.
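    A minimal sketch of the Separation Property evaluation this abstract describes: liquid states (e.g. filtered spike counts of the reservoir neurons) are compared via pairwise distances, and the liquid separates the two gestures if the mean distance between states of different classes exceeds the mean distance within the same class. The synthetic Gaussian states below stand in for recorded chip activity; all sizes are assumptions.

        import numpy as np

        def separation(states_a, states_b):
            """Ratio of mean between-class to mean within-class state distance."""
            between = np.mean([np.linalg.norm(a - b)
                               for a in states_a for b in states_b])
            within = np.mean([np.linalg.norm(x - y)
                              for group in (states_a, states_b)
                              for x in group for y in group])
            return between / within

        rng = np.random.default_rng(0)
        open_hand = rng.normal(0.0, 1.0, (20, 128))     # trials x reservoir units
        closed_hand = rng.normal(1.0, 1.0, (20, 128))
        print(separation(open_hand, closed_hand))       # > 1: the two gestures separate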